EXPLAINING LATERALITY
Working with multi-species allometric relations and drawing on mammalian theorist Denenberg’s works, I provide an explanatory theory of the mammalian dual-brain as no prior account has
Generalised Mixability, Constant Regret, and Bayesian Updating
Mixability of a loss is known to characterise when constant regret bounds are
achievable in games of prediction with expert advice through the use of Vovk's
aggregating algorithm. We provide a new interpretation of mixability via convex
analysis that highlights the role of the Kullback-Leibler divergence in its
definition. This naturally generalises to what we call Φ-mixability, where
the Bregman divergence D_Φ replaces the KL divergence. We prove that
losses that are Φ-mixable also enjoy constant regret bounds via a
generalised aggregating algorithm that is similar to mirror descent.
Comment: 12 pages
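A sketch of the definitions behind this abstract (my notation, assuming the symbol dropped from the snippet is Φ as in the paper's title convention; the exact placement of the learning rate η inside Φ follows the paper, not this sketch):

```latex
% Classical eta-mixability: for any prior p over N experts and any
% expert predictions a_1, ..., a_N there is a single prediction a with
\ell(a, y) \;\le\; -\tfrac{1}{\eta} \log \sum_{i=1}^{N} p_i \, e^{-\eta\, \ell(a_i, y)}
\quad \text{for all outcomes } y.
% By the Gibbs variational principle the right-hand side equals
\min_{q \in \Delta_N} \Big( \textstyle\sum_i q_i\, \ell(a_i, y)
  + \tfrac{1}{\eta}\, \mathrm{KL}(q \,\|\, p) \Big),
% which is the role of the KL divergence the abstract refers to.
% Phi-mixability replaces (1/eta) KL with the Bregman divergence D_Phi of
% a convex entropy Phi:
\ell(a, y) \;\le\; \min_{q \in \Delta_N} \Big( \textstyle\sum_i q_i\, \ell(a_i, y)
  + D_\Phi(q, p) \Big).
```

Taking Φ to be (a scaling of) negative Shannon entropy recovers D_Φ = (1/η) KL and hence classical mixability.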
Generalized Mixability via Entropic Duality
Mixability is a property of a loss which characterizes when fast convergence
is possible in the game of prediction with expert advice. We show that a key
property of mixability generalizes, and the exp and log operations present in
the usual theory are not as special as one might have thought. In doing this we
introduce a more general notion of Φ-mixability, where Φ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural algorithm (the
minimizer of a regret bound) which, analogous to the classical aggregating
algorithm, is guaranteed a constant regret when used with Φ-mixable
losses. We characterize precisely which Φ have Φ-mixable losses and
put forward a number of conjectures about the optimality and relationships
between different choices of entropy.
Comment: 20 pages, 1 figure. Supersedes the work in arXiv:1403.2433 [cs.LG]
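A minimal runnable sketch of the classical aggregating algorithm both abstracts generalise, for the special case of binary log loss with η = 1 (where the substitution step reduces to a Bayesian mixture); the function names and the random test harness are my own illustration, not from either paper:

```python
import math
import random

def aggregating_algorithm(expert_probs, outcomes):
    """Vovk's aggregating algorithm for binary log loss (eta = 1).

    For log loss the substitution step reduces to predicting the
    weight-averaged probability and then reweighting each expert by
    exp(-loss), which is exactly a Bayesian posterior update.
    """
    n = len(expert_probs[0])
    weights = [1.0 / n] * n  # uniform prior over experts
    total_loss = 0.0
    for probs, y in zip(expert_probs, outcomes):
        w_sum = sum(weights)
        pred = sum(w * p for w, p in zip(weights, probs)) / w_sum
        total_loss += -math.log(pred if y == 1 else 1.0 - pred)
        # posterior update: multiply each weight by the expert's likelihood
        weights = [w * (p if y == 1 else 1.0 - p)
                   for w, p in zip(weights, probs)]
    return total_loss

def expert_loss(i, expert_probs, outcomes):
    """Cumulative log loss of expert i on its own."""
    return sum(-math.log(probs[i] if y == 1 else 1.0 - probs[i])
               for probs, y in zip(expert_probs, outcomes))

# Synthetic run: N experts predicting T binary outcomes.
random.seed(0)
T, N = 200, 5
expert_probs = [[random.uniform(0.05, 0.95) for _ in range(N)]
                for _ in range(T)]
outcomes = [random.randint(0, 1) for _ in range(T)]

alg = aggregating_algorithm(expert_probs, outcomes)
best = min(expert_loss(i, expert_probs, outcomes) for i in range(N))
print("regret:", alg - best, "<= ln N =", math.log(N))
```

The constant regret bound the abstracts refer to is exact here: for log loss the algorithm's cumulative loss never exceeds the best expert's by more than ln N, independent of the horizon T.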
- …